Computers make us human

Most of the Lambda Feedback ‘Pioneers’ who are deploying the software in their practice and collaborating on its development.

Educational institutions (as opposed to, for example, apprenticeships or parenting) are limited in the amount of time a teacher can spend with a student. What happens during that limited time together? Could any of that activity be automated, so that the time together is higher quality?

A Venn diagram of what teachers and computers can do

The diagram above lists some qualities and abilities of teachers (people) and computers (machines). Any student-teacher contact time spent on the central region of the diagram can be automated, with the dual benefit that (a) teachers are freed up for the more human-centric qualities and activities on the left of the diagram, and (b) the computer can perform the automated tasks with higher quality (timeliness, accuracy, personalisation, quantity).

In other words, we should be making sure that contact time is quality time.

I’m leading a project at Imperial College London called Lambda Feedback which aims to move in this direction: automating low-level feedback to improve the feedback students receive during homework and to enable higher-quality contact time. The project is six months into a 3.5-year plan, and in this article I share some of the questions we are asking and the partial answers we have so far.

The long term goal is rich, personalised formative feedback at the time of doing homework.

What is good automated feedback?

It goes without saying that feedback should usually give information on correctness. This is a non-trivial task in itself; but this binary feedback is only the beginning – we can give more qualitative feedback too.

If a computer gives feedback to a student, the constraints are different from those when a teacher gives it. We need to develop the art of constructive feedback when it is automated:

  • is saying ‘good try’ an effective message if it’s automatic?
  • when is it ethical to give advice automatically, or even to make a decision on behalf of a teacher (e.g. which exercise to do next)?
  • when should feedback include ‘hints’ or reveal other information that may, or may not, compromise the learning experience (e.g. by ruining the discovery)?
  • how do people respond to mathematical and/or informal feedback when they know it was automated?
  • what risks and opportunities are there to make automated feedback a force for good (or bad)? For example: encouraging good thinking habits; being inclusive; building a community; achieving the graduate attributes.
  • who should decide what feedback is given to a student, and how?

In this project, students and staff are providing input throughout the design of the software. This engagement helps us prioritise. For example, autonomy is important to teachers: content and feedback are curated by the teacher according to their pedagogy, and students are in control of their experience according to their study preferences.

Data-driven feedback

In our approach, teachers develop feedback based on the student responses they gather, which often look very diverse at first. Applying mathematical rules helps group equivalent responses and prioritise the popular cases for specific feedback. Below is an example, followed by a sketch of how such grouping might work in code.

An example of data from student inputs, simplified mathematically to find common cases that would benefit from feedback.
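
As a minimal sketch of this kind of grouping, the following uses SymPy to reduce raw responses to a canonical form and count duplicates. The example responses and variable names are invented for illustration; this is not the project’s actual pipeline.

# Hypothetical sketch: group raw student responses by mathematical
# equivalence so the most common distinct cases can be prioritised
# for targeted feedback. Example data is invented.
from collections import Counter

from sympy import simplify
from sympy.parsing.sympy_parser import parse_expr

raw_responses = ["2*x + 2", "2*(x + 1)", "2x+2", "x*2 + 2", "2*x - 2"]

groups = Counter()
for raw in raw_responses:
    try:
        # Simplify to a canonical form so equivalent answers group together.
        canonical = str(simplify(parse_expr(raw)))
    except Exception:
        # Inputs with syntax errors form their own group.
        canonical = "unparseable"
    groups[canonical] += 1

# Most common mathematically distinct cases first.
for form, count in groups.most_common():
    print(f"{count} response(s) reduce to: {form}")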

The software architecture we are using is micro-service based. The basic web app follows standard practice, and the functions for evaluating student inputs are kept separate so that they can be developed independently. Evaluation functions can be developed by the community to be efficient and reliable, using any language or server architecture necessary; a minimal sketch of such a service follows the diagram below.

A diagram of our software architecture
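
To make the separation concrete, here is a minimal sketch of an evaluation function exposed as its own small web service. The route, JSON field names, and the trivial string comparison are all illustrative assumptions, not Lambda Feedback’s actual API.

# Hypothetical sketch of an evaluation function as a standalone
# micro-service. Route and JSON field names are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/evaluate", methods=["POST"])
def evaluate():
    payload = request.get_json()
    student_response = payload["response"]
    teacher_answer = payload["answer"]
    # Placeholder check; a real evaluation function would compare
    # the expressions mathematically, not as strings.
    is_correct = student_response.strip() == teacher_answer.strip()
    return jsonify({"correct": is_correct})

if __name__ == "__main__":
    app.run(port=8000)

Because the web app only depends on the HTTP contract, each such service can be implemented in whatever language and hosted on whatever infrastructure suits its authors.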

In our experience of using the available software for this purpose, teachers who code their own evaluation functions often get tied in knots. A dedicated community needs to develop these functions and make them available to teachers in a usable form. Our evaluation functions are developed in public repositories where anyone can contribute, and our software also connects to other communities’ servers to use their functions wherever possible.

Example problem

Providing feedback is a non-trivial problem. We cannot simply ‘ask a computer’ for feedback – the algorithms are an area of research. For example, consider the following problem:

An example problem: factorise a cubic expression.

How can an ‘evaluation function’ process a student response to this question? This is partly a question of sensitivity and specificity: reliably accepting responses that are correct while reliably rejecting those that are not.

This problem is more subtle than mathematical equivalence. For example, in the question given here, we could use the following computation to determine the correctness of a student response:

student_response == teacher_answer

The Boolean outcome (‘true’ or ‘false’) can be delivered to the student. However, this approach fails in at least two ways:

  • If the student inputs the question itself (the unfactorised expression), it will be incorrectly evaluated as ‘correct’.
  • If the student enters an answer which is essentially correct but contains a syntax error, it will be incorrectly evaluated as ‘incorrect’.

The evaluation function therefore requires a more sophisticated mathematical and computational approach.
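
As a minimal sketch of what that might look like, the following uses SymPy to address both failure modes: it reports syntax errors separately, checks mathematical equivalence by expansion, and then checks that the response is actually in factored form. The function name, feedback strings, and the reliance on SymPy’s canonical factorisation are assumptions for illustration; it is not the project’s actual evaluation function.

# Hypothetical sketch of a factorisation checker using SymPy.
from sympy import expand, factor
from sympy.parsing.sympy_parser import parse_expr

def evaluate_factorisation(student_response, question_text):
    try:
        response = parse_expr(student_response)
    except Exception:
        # A syntax error gets its own feedback, not a bare 'incorrect'.
        return {"correct": False,
                "feedback": "Your input could not be parsed; check the syntax."}

    question = parse_expr(question_text)

    # Equivalence: expanding the response must recover the original polynomial.
    if expand(response - question) != 0:
        return {"correct": False,
                "feedback": "Your expression is not equivalent to the original."}

    # Form: equivalence alone would accept the unfactorised question itself.
    if response == question:
        return {"correct": False,
                "feedback": "That is the original expression; it still needs factorising."}

    if response == factor(question):
        return {"correct": True, "feedback": "Correct: fully factorised."}

    # Equivalent and rearranged, but not in fully factored form.
    return {"correct": False,
            "feedback": "Equivalent to the original, but not fully factorised."}

print(evaluate_factorisation("(x - 1)*(x - 2)*(x - 3)",
                             "x**3 - 6*x**2 + 11*x - 6"))

Even this sketch shows how quickly the logic grows beyond a single equality check, which is why these functions benefit from shared, community-maintained implementations.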

I will post on this blog as the project develops. The above is summarised in the poster below.

A preview of a poster about the Lambda Feedback project.
